
    Automated verification of termination certificates

    In order to increase user confidence, many automated theorem provers provide certificates that can be independently verified. In this paper, we report on our progress in developing a standalone tool for checking the correctness of certificates for the termination of term rewrite systems, and formally proving its correctness in the proof assistant Coq. To this end, we use the extraction mechanism of Coq and the library on rewriting theory and termination called CoLoR
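
    As a rough illustration of what such a checker computes, here is a minimal OCaml sketch, entirely hypothetical and unrelated to the actual Rainbow/CoLoR code, of the verification of one common kind of certificate, a linear polynomial interpretation: a rule l -> r is accepted only if the interpretation of l exceeds that of r for every valuation over the naturals.

        (* Minimal sketch of a certificate checker (hypothetical code, not the
           extracted Rainbow/CoLoR implementation).  The certificate is a linear
           polynomial interpretation over the naturals; a rule l -> r is accepted
           if [[l]] - [[r]] has non-negative variable coefficients and a strictly
           positive constant part, a standard sufficient criterion. *)

        type term = Var of string | App of string * term list

        module VarMap = Map.Make (String)

        (* A linear polynomial: constant part plus one coefficient per variable. *)
        type poly = { const : int; coefs : int VarMap.t }

        let zero = { const = 0; coefs = VarMap.empty }

        let add p q =
          { const = p.const + q.const;
            coefs = VarMap.union (fun _ a b -> Some (a + b)) p.coefs q.coefs }

        let scale k p =
          { const = k * p.const; coefs = VarMap.map (fun c -> k * c) p.coefs }

        (* Interpretation of a symbol f of arity n:
           [[f]](x1, ..., xn) = c0 + c1*x1 + ... + cn*xn. *)
        type interp = { c0 : int; args : int list }

        (* Symbolic evaluation of a term under the interpretation. *)
        let rec eval (it : string -> interp) = function
          | Var x -> { const = 0; coefs = VarMap.singleton x 1 }
          | App (f, ts) ->
              let i = it f in
              List.fold_left2
                (fun acc c t -> add acc (scale c (eval it t)))
                { zero with const = i.c0 } i.args ts

        (* Sufficient check that [[l]] > [[r]] for every valuation over the naturals.
           A complete checker would also verify monotonicity of the interpretation
           (each argument coefficient >= 1); this is omitted here for brevity. *)
        let strictly_decreasing it (l, r) =
          let diff = add (eval it l) (scale (-1) (eval it r)) in
          diff.const > 0 && VarMap.for_all (fun _ c -> c >= 0) diff.coefs

        (* The certificate is accepted iff every rule of the TRS decreases. *)
        let check_certificate it rules = List.for_all (strictly_decreasing it) rules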

    Translating BNGL models into Kappa: our experience

    Extended abstract. So as to test the Kappa development tools on more examples, we translated the models provided with the BNGL distribution into Kappa. In this talk, we report on our experience. The translation was quite straightforward, except for a few interesting issues that we detail here. Firstly, the use of static analysis exposed some glitches in the modelling of some pathways in the models of the BNGL distribution; we explain how static analysis helped us detect, locate, and correct these flaws. Secondly, expanding BNGL rules that use equivalent sites into rules with uniquely identified sites is not so easy when one wants to faithfully preserve the kinetics of interactions; we recall the semantics of BNGL for equivalent sites and explain how to perform such a translation
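
    As a purely combinatorial illustration (the OCaml helper below is hypothetical and deliberately does not encode BNGL's kinetic conventions), expanding a rule that binds k indistinguishable copies of an equivalent site amounts to enumerating the k-subsets of the uniquely renamed sites; the delicate point discussed in the talk is how the original rate must then be distributed over these expansions.

        (* Enumerate the expansions of a rule that uses k indistinguishable copies
           of an equivalent site, once the site has been renamed s1..sn.  How the
           original rate is split among these expanded rules depends on BNGL's
           counting convention for equivalent sites, so no rate correction is
           hardcoded here. *)

        (* All k-element subsets of a list, in order. *)
        let rec choose k xs =
          match k, xs with
          | 0, _ -> [ [] ]
          | _, [] -> []
          | k, x :: rest ->
              List.map (fun s -> x :: s) (choose (k - 1) rest) @ choose k rest

        let () =
          (* e.g. an agent A whose equivalent site 's' was renamed s1, s2, s3 *)
          let sites = [ "s1"; "s2"; "s3" ] in
          let expansions = choose 2 sites in
          Printf.printf "%d expanded rules:\n" (List.length expansions);
          List.iter
            (fun s -> Printf.printf "  A(%s)\n" (String.concat "," s))
            expansions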

    Reachability analysis via orthogonal sets of patterns

    Rule-based modelling languages, such as Kappa, allow for the description of very detailed mechanistic models. Yet, as the rules become more and more numerous, there is a need for formal methods to enhance the level of confidence in the models that are described with these languages. We develop abstract interpretation tools to capture invariants about the biochemical structure of the bio-molecular species that may occur in a given model. In previous works, we have focused on the relationships between the states of the sites that belong to the same instance of a protein. This comes down to detecting, for a specific set of patterns, which ones may be reachable during the execution of the model. In this paper, we generalise this approach to a broader family of abstract domains that we call orthogonal sets of patterns. More precisely, an orthogonal set of patterns is obtained by recursively refining the information about some patterns containing a given protein, so as to partition the set of occurrences of this protein in any mixture. We show that orthogonal sets of patterns offer a convenient choice to design scalable and accurate static analyses. As an example, we use them to infer properties in models with transport of molecules (more precisely, we show that each pair of proteins that are connected always belong to the same compartment) and in models involving double bindings (we show that whenever a protein of type A is bound twice to proteins of type B, then the protein A is necessarily bound twice to the same instance of the protein B)
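
    The refinement idea can be sketched as follows (an illustrative OCaml fragment with made-up names, not the actual analyser): a pattern on a protein is split by case analysis on one of its sites, and as long as the cases are mutually exclusive and exhaustive, the resulting patterns still partition the occurrences of that protein in any mixture.

        (* Hypothetical sketch of the "orthogonal set of patterns" idea: starting
           from the trivial pattern "any occurrence of protein P", a pattern is
           refined by case analysis on one site; with exclusive and exhaustive
           cases, the leaves keep partitioning the occurrences of P. *)

        type site_state = Free | Bound of string   (* bound to a partner type *)

        (* A pattern constrains some sites of one protein; unconstrained sites
           simply do not appear in the list. *)
        type pattern = { protein : string; sites : (string * site_state) list }

        (* Refine a pattern by case analysis on site [s]. *)
        let refine (p : pattern) (s : string) (cases : site_state list) : pattern list =
          List.map (fun st -> { p with sites = (s, st) :: p.sites }) cases

        (* An orthogonal set is grown by repeatedly replacing one pattern by its
           refinement; reachability analysis then marks which leaves can occur. *)
        let () =
          let top = { protein = "A"; sites = [] } in
          let leaves = refine top "x" [ Free; Bound "B" ] in
          List.iter
            (fun p ->
              Printf.printf "%s(%s)\n" p.protein
                (String.concat ","
                   (List.map
                      (fun (s, st) ->
                        s ^ (match st with Free -> "" | Bound b -> "!" ^ b))
                      p.sites)))
            leaves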

    Local traces: an over-approximation of the behaviour of the proteins in rule-based models

    Thanks to rule-based modelling languages, we can assemble large sets of mechanistic protein-protein interactions within integrated models. Our goal would be to understand how the behaviour of these systems emerges from these low-level interactions. Yet this is quite a long-term challenge, and it is desirable to offer intermediary levels of abstraction, so as to get a better understanding of the models and to increase our confidence in our mechanistic assumptions. In this paper, we propose an abstract interpretation of the behaviour of each protein, in isolation. Given a model written in Kappa, this abstraction computes, for each kind of protein, a transition system that describes which conformations this protein can take and how the protein can pass from one conformation to another. Then, we use simplicial complexes to abstract away the interleaving order of the transformations between conformations that commute. As a result, we get a compact summary of the potential behaviour of each protein of the model
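
    A minimal sketch of the per-protein abstraction, with hypothetical names and data types (not the actual Kappa analyser): conformations are maps from sites to state labels, each rule mentioning the protein induces transitions between conformations, and the reachable conformations are explored exhaustively.

        (* Illustrative per-protein transition system (hypothetical code). *)

        module SiteMap = Map.Make (String)

        type conformation = string SiteMap.t        (* site name -> state label *)

        type transition = {
          rule : string;                            (* name of the inducing rule *)
          pre  : (string * string) list;            (* constrained sites before *)
          post : (string * string) list;            (* their states after *)
        }

        (* A transition applies to a conformation if its preconditions hold. *)
        let applies t c =
          List.for_all (fun (s, v) -> SiteMap.find_opt s c = Some v) t.pre

        (* Applying it overwrites the rewritten sites. *)
        let apply t c =
          List.fold_left (fun acc (s, v) -> SiteMap.add s v acc) c t.post

        (* Exhaustive exploration of the reachable conformations: a small
           transition system over-approximating the protein's behaviour in
           isolation (terminates because site states are finitely many). *)
        let explore (ts : transition list) (init : conformation) =
          let rec loop seen = function
            | [] -> seen
            | c :: todo ->
                if List.exists (SiteMap.equal String.equal c) seen then loop seen todo
                else
                  let next =
                    List.filter_map
                      (fun t -> if applies t c then Some (apply t c) else None) ts
                  in
                  loop (c :: seen) (next @ todo)
          in
          loop [] [ init ]

        let () =
          let init = SiteMap.of_seq (List.to_seq [ ("x", "free"); ("y", "free") ]) in
          let ts = [ { rule = "bind_x"; pre = [ ("x", "free") ]; post = [ ("x", "bound") ] } ] in
          Printf.printf "%d reachable conformations\n" (List.length (explore ts init))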

    STUDY ON THE EFFECT OF CALCIUM-ALGINATE AND WHEY PROTEIN ON THE SURVIVAL RATE OF Bifidobacterium bifidum IN MAYONNAISE

    ABSTRACT – QMFS 2019. The development of functional foods by adding probiotic bacteria is attracting increasing attention. In this study, Bifidobacterium bifidum AS 1.1886 was encapsulated in calcium-alginate 2% w/v (C sample), in a mix of calcium-alginate 2% (w/v) and whey protein 1% (w/v) (CW sample), or in calcium-alginate 2% (w/v) coated with whey protein 1% (w/v) (CcW sample) by the extrusion method, and added to a mayonnaise product. The pH changes, the survival rate of the probiotic bacteria, and the total yeast and mold count during storage, as well as the probiotic survival in a simulated gastric medium, were evaluated. The results showed that the pH changes were not significantly different among the mayonnaise samples in this test. The viability of the free probiotic cells decreased significantly, by about 5.85 log CFU/g, compared with 0.26–1.14 log CFU/g for the encapsulated-cell samples after four weeks of storage; none of the free cells survived after six weeks of storage. The total yeast and mold count in the samples was related to the probiotic count: a probiotic viability above 6 log CFU/g might control the growth of yeasts and molds in mayonnaise. Whey protein was shown to significantly improve the survival rate of B. bifidum, and calcium-alginate coated with whey protein provided the most effective protection. These results indicate the potential for applying encapsulated probiotics in mayonnaise products

    Effect of Plumbagin on growth inhibition and apoptosis of imatinib-resistant chronic myeloid leukemia

    Development of new inhibitors of the BCR-ABL tyrosine kinase is necessary for the treatment of chronic myeloid leukemia (CML) because of increasing resistance and tolerance to Imatinib. Herein, we report that Plumbagin can significantly inhibit the growth of CML cells. The results revealed that Plumbagin inhibited TCCY and TCCY/T315I cells with IC50 values of 3 μM and 2.1 μM, respectively. Plumbagin also showed anti-proliferative effects on both wild-type Ba/F3 cells and BCR-ABL-transfected Ba/F3 cells, with IC50 values ranging from 3.2 to 3.8 μM. In addition, Plumbagin induced apoptosis of CML cells. Plumbagin could therefore provide a new and potential chemotherapy medication for the treatment of Imatinib-resistant CML

    Finishing the euchromatic sequence of the human genome

    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead

    Safety and efficacy of fluoxetine on functional outcome after acute stroke (AFFINITY): a randomised, double-blind, placebo-controlled trial

    Background Trials of fluoxetine for recovery after stroke report conflicting results. The Assessment oF FluoxetINe In sTroke recoverY (AFFINITY) trial aimed to show if daily oral fluoxetine for 6 months after stroke improves functional outcome in an ethnically diverse population. Methods AFFINITY was a randomised, parallel-group, double-blind, placebo-controlled trial done in 43 hospital stroke units in Australia (n=29), New Zealand (four), and Vietnam (ten). Eligible patients were adults (aged ≥18 years) with a clinical diagnosis of acute stroke in the previous 2–15 days, brain imaging consistent with ischaemic or haemorrhagic stroke, and a persisting neurological deficit that produced a modified Rankin Scale (mRS) score of 1 or more. Patients were randomly assigned 1:1 via a web-based system using a minimisation algorithm to once daily, oral fluoxetine 20 mg capsules or matching placebo for 6 months. Patients, carers, investigators, and outcome assessors were masked to the treatment allocation. The primary outcome was functional status, measured by the mRS, at 6 months. The primary analysis was an ordinal logistic regression of the mRS at 6 months, adjusted for minimisation variables. Primary and safety analyses were done according to the patient's treatment allocation. The trial is registered with the Australian New Zealand Clinical Trials Registry, ACTRN12611000774921. Findings Between Jan 11, 2013, and June 30, 2019, 1280 patients were recruited in Australia (n=532), New Zealand (n=42), and Vietnam (n=706), of whom 642 were randomly assigned to fluoxetine and 638 were randomly assigned to placebo. Mean duration of trial treatment was 167 days (SD 48·1). At 6 months, mRS data were available in 624 (97%) patients in the fluoxetine group and 632 (99%) in the placebo group. The distribution of mRS categories was similar in the fluoxetine and placebo groups (adjusted common odds ratio 0·94, 95% CI 0·76–1·15; p=0·53). Compared with patients in the placebo group, patients in the fluoxetine group had more falls (20 [3%] vs seven [1%]; p=0·018), bone fractures (19 [3%] vs six [1%]; p=0·014), and epileptic seizures (ten [2%] vs two [<1%]; p=0·038) at 6 months. Interpretation Oral fluoxetine 20 mg daily for 6 months after acute stroke did not improve functional outcome and increased the risk of falls, bone fractures, and epileptic seizures. These results do not support the use of fluoxetine to improve functional outcome after stroke

    Automated verification of termination certificates

    Making sure that a computer program behaves as expected, especially in critical applications (health, transport, energy, communications, etc.), is more and more important, all the more so since computer programs are becoming ever more ubiquitous and essential to the functioning of modern societies. But how can one check that a program behaves as expected, in particular when the range of its inputs is very large or potentially infinite? To state precisely what the behaviour of a program is, one first needs a formal logical language. However, as Gödel showed, in any formal system rich enough to do arithmetic there are valid formulas that cannot be proved, so no program can decide whether an arbitrary property is true or false. It is nevertheless possible to write a program that checks the correctness of a proof. This work uses precisely such a program, the proof assistant Coq, to formally verify the correctness of another program. In this thesis, we explain the development of a new, faster and formally proved version of Rainbow based on the extraction mechanism of Coq. The previous version of Rainbow verified a CPF file in two steps. First, it used a non-certified OCaml program to translate a CPF file into a Coq script, using the Coq libraries on rewriting theory and termination, CoLoR and Coccinelle. Second, it called Coq to check the correctness of the generated script. This approach is interesting because it provides a way to reuse in Coq termination proofs generated by tools external to Coq; it is also the approach followed by CiME3. However, it suffers from a number of deficiencies. First, because functions in Coq are interpreted, computation is much slower than with programs written in a standard programming language and compiled into binary code. Second, because the translation from CPF to Coq is not certified, it may contain errors and lead either to the rejection of valid certificates or to the acceptance of wrong certificates. To solve the latter problem, one needs to define, and formally prove the correctness of, a function checking whether a certificate is valid or not. To solve the former problem, one needs to compile this function to binary code. The present work shows how to solve these two problems by using the proof assistant Coq and its extraction mechanism to the programming language OCaml: data structures and functions defined in Coq can be translated into OCaml and then compiled to binary code by the OCaml compiler. A similar approach was first initiated in CeTA, using the Isabelle proof assistant and the Haskell programming language
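
    Schematically, the resulting tool is a thin uncertified wrapper around the extracted checker. The OCaml sketch below uses stand-in modules (Cpf_parser and Checker are hypothetical placeholders, not the real Rainbow interface) to show which part of the pipeline is covered by the Coq proof and which part is not.

        (* Hypothetical stand-ins: in the real tool the certified part is produced
           by Coq's extraction mechanism and the parser reads actual CPF XML. *)
        module Cpf_parser = struct
          let parse_problem path = (path, ())      (* placeholder TRS *)
          let parse_certificate path = (path, ())  (* placeholder certificate *)
        end

        module Checker = struct
          (* stand-in for the extracted, Coq-certified checking function *)
          let check _trs _cert = true
        end

        let () =
          match Sys.argv with
          | [| _; problem_file; certificate_file |] ->
              (* Uncertified front end: read the TRS and the certificate. *)
              let trs = Cpf_parser.parse_problem problem_file in
              let cert = Cpf_parser.parse_certificate certificate_file in
              (* Only this thin wrapper lies outside the formal proof. *)
              if Checker.check trs cert then (print_endline "CERTIFIED"; exit 0)
              else (print_endline "REJECTED"; exit 1)
          | _ ->
              prerr_endline "usage: checker <problem.xml> <certificate.cpf>";
              exit 2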

    Application of Fe3O4/thiamine Magnetic Particles in the Removal of Methylene Blue

    Fe3O4/thiamine particles were prepared in this work via a precipitation method. The synthesis is based on the precipitation of Fe3O4 particles in the presence of thiamine as a coating agent. The potential application of Fe3O4/thiamine in the removal of methylene blue (MB) was also investigated. Several factors that affect the synthesis of Fe3O4/thiamine, such as the base concentration, the mass ratio of FeCl2 to thiamine, the reaction temperature, and the reaction time, were determined. The optimal conditions for preparing Fe3O4/thiamine are an NH4OH concentration of 10%, a FeCl2:thiamine mass ratio of 5:1 (g g-1), a reaction temperature of 30 °C, and a reaction time of 120 min. The average particle size of Fe3O4/thiamine is 293.7 nm, while the specific surface area, pore diameter, and magnetization of the obtained Fe3O4/thiamine particles are 57 m2 g-1, 192.67 Å, and 2.4 emu g-1, respectively. A notable point of this work is that Fe3O4/thiamine was obtained at low temperature with a smaller amount of NH4OH. Furthermore, 79.08% of MB could be removed using Fe3O4/thiamine as an adsorbent, with a maximum adsorption capacity of 31.63 mg g-1 at a pH of 10, an MB concentration of 50 mg L-1, and an adsorption time of 15 min. Adsorption kinetics studies showed that the pseudo-second-order model fitted the experimental data better than the pseudo-first-order model, and the adsorption process is physical adsorption following the Freundlich adsorption isotherm model. An adsorption mechanism of MB onto Fe3O4/thiamine is also suggested. The synthesized Fe3O4/thiamine particles could be a potential material for treating wastewater
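
    For reference, the standard linearised forms of the models named above (textbook equations in LaTeX notation; the paper's fitted constants are not reproduced here):

        % Pseudo-first-order (Lagergren) kinetic model
        \ln(q_e - q_t) = \ln q_e - k_1 t

        % Pseudo-second-order kinetic model (the better fit reported above)
        \frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e}

        % Freundlich isotherm (the equilibrium model reported above)
        q_e = K_F \, C_e^{1/n}

    Here q_t and q_e (mg g-1) are the amounts adsorbed at time t and at equilibrium, k_1 and k_2 are the kinetic rate constants, C_e is the equilibrium MB concentration, and K_F and n are the Freundlich constants.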